World Models: 10 Things That Matter in AI Right Now

MIT Technology Review



Three things in AI to watch, according to a Nobel-winning economist

MIT Technology Review

Daron Acemoglu is more cautious than most about predictions of a jobs apocalypse. A few months before he was awarded the Nobel Prize in economics in 2024, Daron Acemoglu published a paper that earned him few fans in Silicon Valley. Contrary to what Big Tech CEOs had been promising--an overhaul of all white-collar work--Acemoglu estimated that AI would give only a small boost to US productivity and would not obviate the need for human work. It's okay at automating certain tasks, he wrote, but some jobs will be perfectly fine. Two years later, Acemoglu's measured take has not caught on. Chatter about an AI jobs apocalypse pops up everywhere from Senator Bernie Sanders's rallies to conversations I overhear in line at the grocery store.


Implementing advanced AI technologies in finance

MIT Technology Review

Successful AI implementation requires shifts in workplace culture as well as use cases that can scale across the enterprise. In finance departments that have long been defined by precision and control, AI has arrived less as a neatly managed upgrade than as a quiet insurgency. Employees are already using it while leadership races to impose structure, governance, and strategy after the fact. The result is a paradox: one of the most tightly regulated functions in the enterprise is now among the most experimentally transformed. What's emerging is a layered shift in how work gets done. From variance commentary and fraud detection to contract review and close narrative drafting, AI is embedding itself across workflows, particularly where unstructured data once slowed everything down.


Cyber-Insecurity in the AI Era

MIT Technology Review

Cybersecurity was already under strain before AI entered the stack. Now, as AI expands the attack surface and adds new complexity, the limits of legacy approaches are becoming harder to ignore. This session from MIT Technology Review's EmTech AI conference explores why security must be rethought with AI at its core, not layered on after the fact. A prolific inventor and internationally recognized authority in knowledge representation, inference calculus, and AI planning, Tarique has spent his career applying autonomously collaborative AI to solve complex, ultra-high-scale challenges across cybersecurity, data security, and compliance -- with deep expertise spanning the Data Classification, DLP, and DSPM industries. His groundbreaking innovations and multiple USPTO patents have earned him global recognition, including frequent invitations to deliver keynote addresses at prestigious international security conferences and forums. At GCCybersecurity, Tarique architected the core AI algorithms powering the company's 4th and 5th generation fully autonomous data leak protection and exfiltration platform -- among the most advanced platforms of its kind.


This startup's new mechanistic interpretability tool lets you debug LLMs

MIT Technology Review

Goodfire wants to make training AI models more like good old-fashioned software engineering. The San Francisco-based startup Goodfire just released a new tool, called Silico, that lets researchers and engineers peer inside an AI model and adjust its parameters--the settings that determine a model's behavior--during training. This could give model makers more fine-grained control over how this technology is built than was once thought possible. Goodfire claims Silico is the first off-the-shelf tool of its kind that can help developers debug all stages of the development process, from building a data set to training a model. LLMs contain an enormous number of parameters. The company says its mission is to make building AI models less like alchemy and more like a science.
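
The article doesn't describe Silico's actual interface, but the general idea of pausing training to inspect and edit a model's parameters can be sketched in plain PyTorch. Everything below is a hypothetical illustration: the toy model, the inspect_and_edit helper, and the clamping intervention are invented for the example and are not Goodfire's implementation.

```python
# A minimal sketch (not Goodfire's API) of "adjusting parameters during
# training": pause between optimizer steps, inspect a layer's weights,
# and edit them directly before training continues.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical toy model standing in for an LLM layer.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def inspect_and_edit(layer: nn.Linear, max_abs: float = 1.0) -> None:
    """Report simple weight statistics and clip outliers in place.

    This is an illustrative intervention, not a claim about how Silico works.
    """
    with torch.no_grad():
        w = layer.weight
        print(f"mean={w.mean().item():.4f}  "
              f"std={w.std().item():.4f}  "
              f"max|w|={w.abs().max().item():.4f}")
        w.clamp_(-max_abs, max_abs)  # directly edit parameters mid-training

x = torch.randn(64, 16)
y = torch.randn(64, 1)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    if step % 25 == 0:                 # periodically "peer inside" the model
        inspect_and_edit(model[0])
```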


Roundtables: Unveiling The 10 Things That Matter in AI Right Now

MIT Technology Review

Watch a subscriber-only discussion unveiling a new list of 10 key technologies in AI that you need to know about in 2026. Subscribers saw a special edition of Roundtables simulcast live from EmTech AI, MIT Technology Review's signature conference for AI leadership, and got an exclusive first look at the list of key technologies, emerging trends, bold ideas, and powerful movements shaping AI in 2026. Grace Huckins, AI reporter, hosted the session as Amy Nordrum and Niall Firth, executive editors, unveiled the list onstage.


This tool could show how consciousness works

MIT Technology Review

Transcranial focused ultrasound is a noninvasive way to stimulate the brain and see how it functions. How does the physical matter in our brains translate into thoughts, sensations, and emotions? It's hard to explore that question without neurosurgery. But in a recent paper, MIT philosopher Matthias Michel, Lincoln Lab researcher Daniel Freeman, and colleagues outline a strategy for doing so with an emerging tool called transcranial focused ultrasound. This noninvasive technology reaches deeper into the brain, with greater resolution, than techniques such as EEG and MRI. It works by sending acoustic waves through the skull to focus on an area of a few millimeters, allowing specific brain structures to be stimulated so the effects can be studied.


Sparse Attentive Backtracking: Temporal Credit Assignment Through Reminding

Neural Information Processing Systems

Learning long-term dependencies in extended temporal sequences requires credit assignment to events far back in the past. The most common method for training recurrent neural networks, back-propagation through time (BPTT), requires credit information to be propagated backwards through every single step of the forward computation, potentially over thousands or millions of time steps. This becomes computationally expensive or even infeasible when used with long sequences. Importantly, biological brains are unlikely to perform such detailed reverse replay over very long sequences of internal states (consider days, months, or years). However, humans are often reminded of past memories or mental states which are associated with the current mental state. We consider the hypothesis that such memory associations between past and present could be used for credit assignment through arbitrarily long sequences, propagating the credit assigned to the current state to the associated past state. Based on this principle, we study a novel algorithm that only back-propagates through a few of these temporal skip connections, realized by a learned attention mechanism that associates current states with relevant past states. We demonstrate in experiments that our method matches or outperforms regular BPTT and truncated BPTT in tasks involving particularly long-term dependencies, but without requiring the biologically implausible backward replay through the whole history of states. Additionally, we demonstrate that the proposed method transfers to longer sequences significantly better than LSTMs trained with BPTT and LSTMs trained with full self-attention.
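
A minimal sketch of this idea in PyTorch is given below. It is a simplified illustration under assumed choices (a GRU cell, hard top-k selection of past states, step-to-step recurrence truncated to a single step), not the authors' released code; the class name SparseAttentiveRNN and all hyperparameters are invented for the example.

```python
# Simplified sketch of sparse attentive backtracking: the recurrent path is
# detached (so ordinary BPTT is truncated), while a learned attention picks a
# few past hidden states and adds them through skip connections that DO carry
# gradient, letting credit reach distant time steps only through those skips.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAttentiveRNN(nn.Module):
    def __init__(self, input_size: int, hidden_size: int, top_k: int = 3):
        super().__init__()
        self.cell = nn.GRUCell(input_size, hidden_size)
        self.attn = nn.Linear(hidden_size, hidden_size, bias=False)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (seq_len, batch, input_size)
        seq_len, batch, _ = x.shape
        h = x.new_zeros(batch, self.cell.hidden_size)
        memory = []                      # past hidden states, graphs retained
        outputs = []
        for t in range(seq_len):
            # Truncate ordinary BPTT: the recurrent path carries no gradient.
            h = self.cell(x[t], h.detach())
            if memory:
                mem = torch.stack(memory, dim=1)          # (batch, t, hidden)
                scores = torch.einsum("bh,bth->bt", self.attn(h), mem)
                k = min(self.top_k, mem.size(1))
                top_scores, idx = scores.topk(k, dim=1)   # sparse selection
                weights = F.softmax(top_scores, dim=1)
                picked = mem.gather(
                    1, idx.unsqueeze(-1).expand(-1, -1, mem.size(-1)))
                # Skip connection: gradients flow to the selected past states.
                h = h + torch.einsum("bk,bkh->bh", weights, picked)
            memory.append(h)
            outputs.append(h)
        return torch.stack(outputs)

# Toy usage: credit reaches early time steps only via the attended skips.
model = SparseAttentiveRNN(input_size=8, hidden_size=16)
x = torch.randn(50, 4, 8)
loss = model(x).sum()
loss.backward()
```

In this sketch the per-step compute of attending over all stored states is kept for brevity; the point it illustrates is only the credit-assignment path, where backward passes traverse a handful of learned skip connections rather than the full chain of recurrent states.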